Oracle-guided scheduling for controlling granularity in implicitly parallel languages
Authors
Abstract
A classic problem in parallel computing is determining whether to execute a thread in parallel or sequentially. If small threads are executed in parallel, the overheads due to thread creation can overwhelm the benefits of parallelism, resulting in suboptimal efficiency and performance. If large threads are executed sequentially, processors may spin idle, resulting again in suboptimal efficiency and performance. This “granularity problem” is especially important in implicitly parallel languages, where the programmer expresses all potential for parallelism, leaving it to the system to exploit parallelism by creating threads as necessary. Although granularity control has been identified as an important problem, it is not well understood; broadly applicable solutions remain elusive. In this paper, we propose techniques for automatically controlling granularity in implicitly parallel programming languages to achieve parallel efficiency and performance. To this end, we first extend a classic result, Brent’s theorem (a.k.a. the work-time principle), to include thread-creation overheads. Using a cost semantics for a general-purpose language in the style of the lambda calculus with parallel tuples, we then present a precise accounting of thread-creation overheads and bound their impact on efficiency and performance. To reduce such overheads, we propose an oracle-guided semantics that uses estimates of the sizes of parallel threads. We show that, if the oracle provides accurate estimates in constant time, then the oracle-guided semantics reduces the thread-creation overheads for a reasonably large class of parallel computations. We describe how to approximate the oracle-guided semantics in practice by combining static and dynamic techniques. We require the programmer to provide the asymptotic complexity cost for each parallel thread and use runtime profiling to determine hardware-specific constant factors. We present an implementation of the proposed approach as an extension of the Manticore compiler for Parallel ML. Our empirical evaluation shows that our techniques can reduce thread-creation overheads, leading to good efficiency and performance.
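To make the approach concrete, the sketch below illustrates the general shape of oracle-guided granularity control rather than the paper's actual Parallel ML implementation. It uses OCaml 5's Domain module as a stand-in for a parallel runtime; the names cost, kappa, and cutoff, and their constants, are illustrative assumptions, with kappa playing the role of the hardware-specific constant factor that the paper obtains by runtime profiling.

(* A minimal OCaml sketch (not the paper's Parallel ML implementation) of
   oracle-guided granularity control.  [cost], [kappa] and [cutoff] are
   illustrative names, not taken from the paper. *)

(* Programmer-supplied asymptotic cost of a task, in abstract work units;
   summing an array segment is linear in its length. *)
let cost lo hi = float_of_int (hi - lo)

(* Hardware-specific constant (seconds per work unit).  The paper obtains
   this by runtime profiling; it is hard-coded here for illustration. *)
let kappa = 1e-8

(* Spawn threshold in seconds: spawn only when the predicted running time
   comfortably exceeds the thread-creation overhead. *)
let cutoff = 1e-3

let rec psum (a : float array) lo hi =
  if hi - lo <= 1 then (if hi > lo then a.(lo) else 0.0)
  else begin
    let mid = (lo + hi) / 2 in
    if kappa *. cost lo hi > cutoff then begin
      (* Predicted work is large: evaluate the two halves in parallel. *)
      let left = Domain.spawn (fun () -> psum a lo mid) in
      let right = psum a mid hi in
      Domain.join left +. right
    end
    else
      (* Predicted work is small: run sequentially to avoid spawn overhead. *)
      psum a lo mid +. psum a mid hi
  end

let () =
  let a = Array.init 1_000_000 (fun i -> float_of_int i) in
  Printf.printf "sum = %f\n" (psum a 0 (Array.length a))

With these constants, segments below roughly 100,000 elements are summed sequentially, so spawn overhead is paid only on tasks predicted to run long enough to amortize it. A production scheduler would dispatch the parallel branches to a fixed pool of workers (for example via work stealing, as in Manticore or Domainslib) rather than spawn a raw domain per task.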
Similar resources
Scheduling of Multiple Autonomous Guided Vehicles for an Assembly Line Using Minimum Cost Network Flow
This paper proposes a parallel automated assembly line system with multiple autonomous guided vehicles (AGVs) for producing multiple products. Several assembly lines are configured to produce multiple products, with machine technologies shared among the assembly lines when required. The transportation between the stations within an assembly line (intra-assembly-line) and among station...
Performance Evaluation of Three Dynamic Load Balancing Algorithms on SPMD Model
In this research paper, we focus on the performance of different task migration and load balancing algorithms on the SPMD model, based on their controlling parameters. A network of workstations has been chosen, and PVM libraries have been used for implementation. Matrix multiplication has been selected as the application. Three algorithms have been investigated, namely fixed granularity, variable gra...
A Performance Study of Locking Granularity in Shared-Nothing Parallel Database Systems
Locking granularity refers to the size of a lockable data unit, called a "granule", in a database system. Fine granularity improves system performance by increasing the concurrency level, but it also increases lock management overhead. Coarse granularity, on the other hand, sacrifices system performance but lowers the lock management cost. This paper explores the impact of granule size on performanc...
Empirical Study of Variable Granularity and Global Centralized Load Balancing Algorithms
Task migration and load sharing algorithms are two load balancing strategies that are essential in distributed-memory multiprocessor as well as multicomputer environments. Dynamic load balancing is more suitable for heterogeneous systems. Various load sharing and global centralized algorithms have been proposed in the literature. These algorithms demand careful investigation of their suita...
Profiling scheduling strategies on the GRIP parallel
It is widely claimed that functional languages are particularly suitable for programming parallel computers. A claimed advantage is that the programmer is not burdened with details of task creation, placement, scheduling, and synchronisation, these decisions being taken by the system instead. Leaving aside the question of whether a pure functional language is expressive enough to encompass all ...
Journal:
J. Funct. Program.
Volume 26, Issue
Pages -
Publication date 2016